17 research outputs found

    Local user mapping via multi-modal fusion for social robots

    Get PDF
    User detection, recognition and tracking is at the heart of Human-Robot Interaction, and yet, to date, no universal robust method exists for being aware of the people in a robot's surroundings. The presented work aims at importing into existing social robotics platforms different techniques, some of them classical and others novel, for detecting, recognizing and tracking human users. These algorithms are based on a variety of sensors, mainly cameras and depth imaging devices, but also lasers and microphones. The results of these parallel algorithms are then merged so as to obtain a modular, expandable and fast architecture, which yields a local user map through multi-modal fusion. Thanks to this user awareness architecture, user detection, recognition and tracking capabilities can be easily and quickly given to any robot by re-using the modules that match its sensors and its processing performance. The architecture provides all the relevant information about the users around the robot, which can then be used by end-user applications to adapt their behavior to those users. The social robots in which the architecture has been successfully implemented include a car-like mobile robot, an articulated flower and a humanoid assistance robot. Some modules of the architecture are very lightweight but have low reliability, while others need more CPU but offer higher confidence. All combinations of modules are possible and fit the range of possible robotics hardware configurations. The modules are independent and highly configurable, so no code needs to be written to build a new configuration: the user only writes a ROS launch file, a simple text file listing the desired modules. The architecture has been developed with modularity and speed in mind. It is based on the Robot Operating System (ROS), a de facto software standard in robotics. The different people detectors comply with a common interface called PeoplePoseList Publisher, while the people recognition algorithms comply with an interface called PeoplePoseList Matcher. The fusion of all these different modules is based on Unscented Kalman Filter techniques. Extensive benchmarks of the sub-components and of the whole architecture, using both academic datasets and data acquired in our lab, together with end-user application samples, demonstrate the validity and interest of all levels of the architecture.
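    The two common interfaces named above can be pictured with a small illustration. Below is a minimal Python sketch of the idea behind PeoplePoseList Publisher (people detectors) and PeoplePoseList Matcher (people recognizers); the data fields and method names are assumptions made for illustration only, not the actual message and node definitions of the thesis.

        # Minimal sketch of the two interfaces named in the abstract. The field
        # names and method signatures below are assumptions for illustration.
        from abc import ABC, abstractmethod
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class PeoplePose:
            # Hypothetical fields: 3D position in the robot frame plus an identity guess.
            x: float
            y: float
            z: float
            person_id: str = "unknown"
            confidence: float = 0.0

        @dataclass
        class PeoplePoseList:
            stamp: float = 0.0
            poses: List[PeoplePose] = field(default_factory=list)

        class PeoplePoseListPublisher(ABC):
            """Common interface for people detectors (camera, depth, laser, audio...)."""
            @abstractmethod
            def detect(self, sensor_data) -> PeoplePoseList:
                """Return the people detected in the latest sensor reading."""

        class PeoplePoseListMatcher(ABC):
            """Common interface for people recognizers: assign identities to detections."""
            @abstractmethod
            def match(self, detections: PeoplePoseList) -> PeoplePoseList:
                """Return the same list with person_id and confidence filled in."""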

    Fast 3D cluster tracking for a mobile robot using 2D techniques on depth images

    Get PDF
    Simultaneous user detection and tracking is an issue at the core of human-robot interaction (HRI). Several methods exist and give good results; many use image processing techniques on images provided by the camera. The increasing presence of range-imaging cameras in mobile robots (such as structured-light devices like the Microsoft Kinect) allows image processing to be developed on depth maps. In this article, a fast and lightweight algorithm is presented for the detection and tracking of 3D clusters using classic 2D techniques, such as edge detection and connected components, applied to the depth maps. Clusters are recognized using their 2D shape. An algorithm for the compression of depth maps has been specifically developed, allowing the whole processing to be distributed among several computers. The algorithm is then applied to a mobile robot chasing an object selected by the user. The algorithm is coupled with laser-based tracking to make up for the narrow field of view of the range-imaging camera. The workload created by the method is light enough to enable its use even on processors with limited capabilities. Extensive experimental results are given to verify the usefulness of the proposed method. This work was supported by the Spanish MICINN (Ministry of Science and Innovation) through the project "Applications of Social Robots (Aplicaciones de los Robots Sociales)".
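    As a rough illustration of the processing described above, the following Python/OpenCV sketch applies edge detection and connected-component labelling to a depth map to extract candidate 3D clusters; the 16-bit millimetre depth encoding, the Canny thresholds and the minimum cluster area are assumptions, not the parameters used in the article.

        # Sketch: segment a depth image into clusters using 2D techniques only.
        import cv2
        import numpy as np

        def depth_clusters(depth_mm: np.ndarray, min_area: int = 500):
            """Segment a 16-bit depth image (millimetres) into connected clusters."""
            # Rescale depth to 8 bits so that Canny can operate on it.
            depth_8u = cv2.convertScaleAbs(depth_mm, alpha=255.0 / max(int(depth_mm.max()), 1))

            # Depth discontinuities appear as strong edges between objects.
            edges = cv2.Canny(depth_8u, 50, 150)

            # Drop edge pixels and invalid depth, then label the remaining regions:
            # each label is a candidate 3D cluster seen as a 2D blob.
            mask = np.where((depth_mm > 0) & (edges == 0), 255, 0).astype(np.uint8)
            n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

            clusters = []
            for label in range(1, n_labels):  # label 0 is the background
                if stats[label, cv2.CC_STAT_AREA] >= min_area:
                    clusters.append({
                        "centroid_px": tuple(centroids[label]),
                        "mean_depth_mm": float(depth_mm[labels == label].mean()),
                        "area_px": int(stats[label, cv2.CC_STAT_AREA]),
                    })
            return clusters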

    Maggie: A Social Robot as a Gaming Platform

    Get PDF
    Edutainment robots are robots designed to participate in people's education and entertainment. One of their tasks is to play with their human partners, but most of them offer a limited pool of games, and it is difficult to add new games to them. This lack of flexibility could shorten their life cycle. This paper presents a social robot on which several robotic games have been developed. Our robot uses a flexible and modular architecture that allows new skills to be created by composing existing, simpler skills. With this architecture, developing a new game consists mainly of composing the skills needed for that specific game. In this paper, we present the robot, its hardware and its software architecture, including its interaction capabilities. We also provide a detailed description of the development of five of the games the robot can play. The authors gratefully acknowledge the funds provided by the Spanish MICINN (Ministry of Science and Innovation) through the project "A new Approach to Social Robotics (AROS)". The research leading to these results has also received funding from the RoboCity2030-II-CM project (S2009/DPI-1559), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU.
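    The skill-composition principle can be sketched in a few lines of Python: a new game is assembled from simpler, already existing skills instead of being written from scratch. All class and skill names below are hypothetical and only illustrate the pattern; they are not the robot's actual API.

        # Hypothetical sketch of composing a game from simpler skills.
        class Skill:
            """Base unit of behavior, activated and deactivated by higher-level skills."""
            def activate(self):
                pass
            def deactivate(self):
                pass

        class SpeechSkill(Skill):
            def say(self, text: str):
                print(f"[tts] {text}")

        class GestureSkill(Skill):
            def perform(self, gesture: str):
                print(f"[gesture] {gesture}")

        class QuizGame(Skill):
            """A new game built by composing the two skills above."""
            def __init__(self, speech: SpeechSkill, gesture: GestureSkill):
                self.speech = speech
                self.gesture = gesture
            def activate(self):
                self.speech.say("Let's play! First question coming up.")
                self.gesture.perform("wave")

        QuizGame(SpeechSkill(), GestureSkill()).activate()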

    Vision-based people detection using depth information for social robots: an experimental evaluation

    Get PDF
    Robots are starting to be applied in areas that involve sharing space with humans. In particular, social robots and people will coexist closely because the former are intended to interact with the latter. In this context, it is crucial that robots are aware of the presence of people around them. Traditionally, people detection has been performed using a stream of two-dimensional images. In nature, however, animals perceive their surroundings using both color and depth information. In this work, we present new people detectors that make use of the data provided by depth sensors and RGB images to deal with the characteristics of human-robot interaction scenarios. These people detectors are based on previous work using two-dimensional images and on existing people detectors from different areas. The disparity of the input and output data used by these types of algorithms usually complicates their integration into robot control architectures. We propose a common interface that can be used by any people detector, resulting in numerous advantages. Several people detectors using depth information and the common interface have been implemented and evaluated. The results show a great diversity among the different algorithms: each one has a particular domain of use, which is reflected in the results. A clever combination of several algorithms appears as a promising way to achieve a flexible, reliable people detector. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research leading to these results has received funding from the project Development of Social Robots to Help Seniors with Cognitive Impairment (ROBSEN), funded by the Ministerio de Economía y Competitividad, and from RoboCity2030-III-CM, funded by the Comunidad de Madrid and cofunded by Structural Funds of the EU.
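    The combination of several detectors behind the common interface suggested above can be sketched as follows; the detector signature, the confidence-based merging and the 0.5 m merge radius are assumptions chosen for illustration.

        # Sketch: merge the outputs of several people detectors sharing one interface.
        import math
        from typing import Callable, List, Tuple

        # Each detector returns a list of (x, y, confidence) detections in metres.
        Detection = Tuple[float, float, float]
        Detector = Callable[[object], List[Detection]]

        def combine_detectors(detectors: List[Detector], sensor_data,
                              merge_radius_m: float = 0.5) -> List[Detection]:
            merged: List[Detection] = []
            for detector in detectors:
                for x, y, conf in detector(sensor_data):
                    for i, (mx, my, mconf) in enumerate(merged):
                        if math.hypot(x - mx, y - my) < merge_radius_m:
                            # Same person seen by two detectors: keep the better score.
                            if conf > mconf:
                                merged[i] = (x, y, conf)
                            break
                    else:
                        merged.append((x, y, conf))
            return merged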

    A local user mapping architecture for social robots

    Get PDF
    User detection, recognition, and tracking is at the heart of human-robot interaction, and yet, to date, no universal robust method exists for being aware of the people in a robot's surroundings. The present article imports into existing social robotic platforms different techniques, some of them classical and others novel, for detecting, recognizing, and tracking human users. The outputs of the parallel execution of these algorithms are then merged, creating a modular, expandable, and fast architecture. This results in a local user map built through the fusion of multiple user recognition techniques. The different people detectors comply with a common interface called PeoplePoseList Publisher, while the people recognition algorithms meet an interface called PeoplePoseList Matcher. The fusion of all these different modules is based on the Unscented Kalman Filter technique. Extensive benchmarks of the subcomponents and of the whole architecture demonstrate the validity and interest of all levels of the architecture. In addition, all the software and data sets generated in this work are freely available. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research leading to these results has received funding from the project Development of Social Robots to Help Seniors with Cognitive Impairment (ROBSEN), funded by the Ministerio de Economía y Competitividad (DPI2014-57684-R), and from the RoboCity2030-III-CM project (S2013/MIT-2748), funded by Programas de Actividades I+D en la Comunidad de Madrid and cofunded by Structural Funds of the EU.
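    As an illustration of the fusion step, the sketch below tracks one user's planar position with an Unscented Kalman Filter, here built on the filterpy library; the constant-velocity model, the noise values and the update rate are assumptions, and this is not necessarily the implementation used in the original work.

        # Sketch: fuse position observations of one user with a UKF (filterpy).
        # State: (x, y, vx, vy); measurements: (x, y) from any detector module.
        import numpy as np
        from filterpy.kalman import MerweScaledSigmaPoints, UnscentedKalmanFilter

        def fx(state, dt):
            """Constant-velocity motion model (assumed)."""
            x, y, vx, vy = state
            return np.array([x + vx * dt, y + vy * dt, vx, vy])

        def hx(state):
            """Detectors observe position only."""
            return state[:2]

        dt = 0.1  # assumed update period in seconds
        points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
        ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
        ukf.x = np.array([0.0, 0.0, 0.0, 0.0])
        ukf.R = np.diag([0.05, 0.05])   # measurement noise (assumed)
        ukf.Q = np.eye(4) * 0.01        # process noise (assumed)

        # Feed detections coming from any module that complies with the common interface.
        for z in [np.array([1.0, 0.5]), np.array([1.1, 0.55]), np.array([1.2, 0.6])]:
            ukf.predict()
            ukf.update(z)
            print("estimated user position:", ukf.x[:2])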

    Using a Social Robot as a Gaming Platform

    No full text
    As social robotics research advances, robots are improving their Human-Robot Interaction abilities and, therefore, becoming more human-friendly. As robots begin to interact more naturally with humans, new applications and possible uses of social robots are appearing. One of the future applications where robots will be used is entertainment. This paper presents a social robot as a development platform for which several robotic games have been developed. Five of these games are presented, and how they benefit from the robot's HRI abilities is detailed.